
    Fusing Structure from Motion and Simulation-Augmented Pose Regression from Optical Flow for Challenging Indoor Environments

    The localization of objects is a crucial task in various applications such as robotics, virtual and augmented reality, and the transportation of goods in warehouses. Recent advances in deep learning have enabled localization using monocular cameras. While structure from motion (SfM) predicts the absolute pose from a point cloud, absolute pose regression (APR) methods learn a semantic understanding of the environment through neural networks. However, both fields face environmental challenges such as motion blur, lighting changes, repetitive patterns, and feature-less structures. This study aims to address these challenges by incorporating additional information and regularizing the absolute pose using relative pose regression (RPR) methods. The optical flow between consecutive images is computed using the Lucas-Kanade algorithm, and the relative pose is predicted using a small auxiliary recurrent convolutional network. The fusion of absolute and relative poses is a complex task due to the mismatch between the global and local coordinate systems. State-of-the-art methods fusing absolute and relative poses use pose graph optimization (PGO) to regularize the absolute pose predictions using relative poses. In this work, we propose recurrent fusion networks that optimally align absolute and relative pose predictions to improve the absolute pose estimate. We evaluate eight different recurrent units and construct a simulation environment to pre-train the APR and RPR networks for better generalization. Additionally, we record a large database of different scenarios in a challenging large-scale indoor environment that mimics a warehouse with transportation robots. We conduct hyperparameter searches and experiments to show the effectiveness of our recurrent fusion method compared to PGO.
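    As a rough illustration of the optical-flow step mentioned in this abstract, the minimal sketch below computes sparse pyramidal Lucas-Kanade flow between two consecutive frames with OpenCV. The file names, feature-detector settings, and the idea of passing the resulting flow vectors to an RPR network are assumptions for illustration, not the authors' exact pipeline.

```python
import cv2
import numpy as np

# Load two consecutive grayscale frames (paths are placeholders).
prev_gray = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
next_gray = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

# Detect corner features in the first frame to track.
prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                   qualityLevel=0.01, minDistance=7)

# Pyramidal Lucas-Kanade: track the features into the second frame.
next_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                 prev_pts, None)

# Keep only successfully tracked points and form flow vectors, which
# could then be fed to a relative pose regression network.
good_prev = prev_pts[status.flatten() == 1].reshape(-1, 2)
good_next = next_pts[status.flatten() == 1].reshape(-1, 2)
flow = good_next - good_prev
print("tracked points:", flow.shape[0])
```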

    Green Sturgeon Physical Habitat Use in the Coastal Pacific Ocean

    The green sturgeon (Acipenser medirostris) is a highly migratory, oceanic, anadromous species with a complex life history that makes it vulnerable to species-wide threats in both freshwater and at sea. Green sturgeon population declines have preceded legal protection and curtailment of activities in marine environments deemed to increase its extinction risk. Yet, its marine habitat is poorly understood. We built a statistical model to characterize green sturgeon marine habitat using data from a coastal tracking array located along the Siletz Reef near Newport, Oregon, USA that recorded the passage of 37 acoustically tagged green sturgeon. We classified seafloor physical habitat features with high-resolution bathymetric and backscatter data. We then described the distribution of habitat components and their relationship to green sturgeon presence using ordination and subsequently used generalized linear model selection to identify important habitat components. Finally, we summarized depth and temperature recordings from seven green sturgeon present off the Oregon coast that were fitted with pop-off archival geolocation tags. Our analyses indicated that green sturgeon, on average, spent a longer duration in areas with high seafloor complexity, especially where a greater proportion of the substrate consists of boulders. Green sturgeon in marine habitats are primarily found at depths of 20–60 m and at temperatures of 9.5–16.0°C. Many sturgeon in this study were likely migrating northward and moving deeper, and may have used complex seafloor habitat because it coincides with the distribution of benthic prey taxa or offers refuge from predators. Identifying important green sturgeon marine habitat is an essential step towards accurately defining the conditions that are necessary for its survival and will eventually yield range-wide, spatially explicit predictions of green sturgeon distribution.
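    For readers unfamiliar with the modeling step, the following minimal sketch fits a binomial generalized linear model relating presence to a few seafloor covariates with statsmodels. The covariate names and simulated data are hypothetical stand-ins for the study's actual habitat components and model-selection procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical habitat table: one row per grid cell within the tracking
# array, with seafloor covariates and a binary presence indicator.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "depth_m":          rng.uniform(10, 80, 200),
    "boulder_fraction": rng.uniform(0, 1, 200),
    "rugosity":         rng.uniform(1, 3, 200),
    "presence":         rng.binomial(1, 0.3, 200),
})

# Binomial GLM relating presence to habitat components, analogous in
# spirit to the model-selection step described in the abstract.
X = sm.add_constant(df[["depth_m", "boulder_fraction", "rugosity"]])
model = sm.GLM(df["presence"], X, family=sm.families.Binomial()).fit()
print(model.summary())
print("AIC:", model.aic)  # an information criterion can rank candidate covariate sets
```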

    Domain Adaptation for Time-Series Classification to Mitigate Covariate Shift

    The performance of a machine learning model degrades when it is applied to data from a similar but different domain than the data it was initially trained on. To mitigate this domain shift problem, domain adaptation (DA) techniques search for an optimal transformation that converts the (current) input data from a source domain to a target domain to learn a domain-invariant representation that reduces domain discrepancy. This paper proposes a novel supervised DA method based on two steps. First, we search for an optimal class-dependent transformation from the source to the target domain from a few samples. We consider optimal transport methods such as the earth mover's distance, Sinkhorn transport and correlation alignment. Second, we use embedding similarity techniques to select the corresponding transformation at inference. We use correlation metrics and higher-order moment matching techniques. We conduct an extensive evaluation on time-series datasets with domain shift, including simulated and various online handwriting datasets, to demonstrate its performance.
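    As one concrete example of the correlation-alignment option named in this abstract, the sketch below implements a plain, class-independent CORAL-style transformation in NumPy: source features are whitened with the source covariance and re-colored with the target covariance. The regularization constant and toy data are assumptions and do not reproduce the paper's class-dependent, few-sample setting.

```python
import numpy as np

def coral_transform(source, target, eps=1e-5):
    """Correlation alignment (CORAL): align the second-order statistics
    of source features to those of the target domain."""
    # Regularized covariance matrices of both domains.
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])

    def mat_pow(m, p):
        # Symmetric matrix power via eigendecomposition.
        vals, vecs = np.linalg.eigh(m)
        return vecs @ np.diag(np.clip(vals, eps, None) ** p) @ vecs.T

    # Whiten the centered source features, then re-color them with the
    # target covariance and shift to the target mean.
    whitened = (source - source.mean(0)) @ mat_pow(cs, -0.5)
    return whitened @ mat_pow(ct, 0.5) + target.mean(0)

# Toy example with a shifted, rescaled target domain (shapes are arbitrary).
src = np.random.randn(100, 8)
tgt = np.random.randn(120, 8) * 2.0 + 1.0
src_aligned = coral_transform(src, tgt)
```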

    Cross-Modal Common Representation Learning with Triplet Loss Functions

    Common representation learning (CRL) learns a shared embedding between two or more modalities to improve performance in a given task compared to using only one of the modalities. CRL from different data types such as images and time-series data (e.g., audio or text data) requires a deep metric learning loss that minimizes the distance between the modality embeddings. In this paper, we propose to use the triplet loss, which uses positive and negative identities to create sample pairs with different labels, for CRL between image and time-series modalities. By adapting the triplet loss for CRL, higher accuracy in the main (time-series classification) task can be achieved by exploiting additional information from the auxiliary (image classification) task. Our experiments on synthetic data and handwriting recognition data from sensor-enhanced pens show improved classification accuracy, faster convergence, and better generalizability.
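    The following minimal PyTorch sketch illustrates how a triplet loss can tie an image encoder and a time-series encoder to a shared embedding space. The toy encoders, input shapes, and fixed margin are assumptions for illustration rather than the architecture used in the paper.

```python
import torch
import torch.nn as nn

# Toy encoders for the two modalities; the real architectures and
# dimensions are not specified here.
image_encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64))
series_encoder = nn.Sequential(nn.Flatten(), nn.Linear(100 * 3, 64))

triplet = nn.TripletMarginLoss(margin=1.0)

# Anchor: time-series embedding; positive: image of the same class;
# negative: image of a different class.
ts = torch.randn(32, 100, 3)           # batch of time-series samples
img_pos = torch.randn(32, 1, 28, 28)   # images sharing the anchor's label
img_neg = torch.randn(32, 1, 28, 28)   # images with a different label

anchor = series_encoder(ts)
positive = image_encoder(img_pos)
negative = image_encoder(img_neg)

loss = triplet(anchor, positive, negative)
loss.backward()  # gradients flow into both encoders, pulling the
                 # modality embeddings into a shared space
```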

    Auxiliary Cross-Modal Representation Learning With Triplet Loss Functions for Online Handwriting Recognition

    Cross-modal representation learning learns a shared embedding between two or more modalities to improve performance in a given task compared to using only one of the modalities. Cross-modal representation learning from different data types, such as images and time-series data (e.g., audio or text data), requires a deep metric learning loss that minimizes the distance between the modality embeddings. In this paper, we propose to use the contrastive or triplet loss, which uses positive and negative identities to create sample pairs with different labels, for cross-modal representation learning between image and time-series modalities (CMR-IS). By adapting the triplet loss for cross-modal representation learning, higher accuracy in the main (time-series classification) task can be achieved by exploiting additional information from the auxiliary (image classification) task. We present a triplet loss with a dynamic margin for single label and sequence-to-sequence classification tasks. We perform extensive evaluations on synthetic image and time-series data, on offline handwriting recognition (HWR) data, and on online HWR data from sensor-enhanced pens for classifying written words. Our experiments show improved classification accuracy, faster convergence, and better generalizability due to an improved cross-modal representation. Furthermore, the improved generalizability leads to better adaptability between writers for online HWR.
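    One possible reading of the dynamic-margin idea is a per-sample margin that grows with a label distance between the anchor and the negative, for example an edit distance between written words. The sketch below is such an illustrative formulation, not the paper's exact loss; all names and constants are assumptions.

```python
import torch
import torch.nn.functional as F

def triplet_loss_dynamic_margin(anchor, positive, negative, label_dist,
                                base_margin=0.5, scale=0.1):
    """Triplet loss whose margin grows with a per-sample label distance
    (e.g., an edit distance between the anchor's and negative's words).
    Illustrative reading of a 'dynamic margin', not the paper's formula."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    margin = base_margin + scale * label_dist        # per-sample margin
    return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()

# Toy embeddings and label distances (all values are placeholders).
a, p, n = (torch.randn(16, 64) for _ in range(3))
dist = torch.randint(1, 6, (16,)).float()            # e.g., word edit distances
print(triplet_loss_dynamic_margin(a, p, n, dist))
```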

    Cryopreservation impairs 3-D migration and cytotoxicity of natural killer cells

    Cryopreservation is the standard protocol prior to using natural killer (NK) cells in immunotherapy. Here the authors show that cryopreservation substantially reduces the clinical utility of these cells owing to a defect in their motility, an effect that might account for the failure of NK cell immunotherapy to treat some cancers.